Adding Benchmarking Capabilities to PICML: User Guide
Abstract
The Platform Independent Component Modeling Language (or PICML, as it will be called hereafter) is a graphical modeling language for the building blocks of component applications, that is, applications built on top of component middleware. It isn’t intended to represent, or conform to the specification of, any particular kind of middleware or any particular vendor’s product. Rather, it’s an environment in which to design and modify component applications in a technology-neutral way, one that can then be translated into the middleware flavor of your choice. One such translator has already been written and is described in detail at the end of this document; more are in the works.

PICML itself was designed using the Generic Modeling Environment (GME), a powerful modeling tool developed at the Institute for Software Integrated Systems (ISIS) at Vanderbilt University. Models in PICML are also constructed using GME (I told you it was powerful), since PICML can be registered with an installed copy of GME and then selected from a list of modeling languages when starting a new model project. Documentation, downloads, and other information can be found on the GME web page at http://www.isis.vanderbilt.edu/Projects/gme/

This document pertains to adding benchmarking capabilities to PICML and discusses benchmark generation from higher-level models. Using the generated benchmarking code, metrics such as roundtrip latency and throughput can be measured. This tool is designed as a helper tool and interacts with other tools such as OCML (Options Configuration Modeling Language) and PICML (deployment plan evaluation). These tools are part of a larger tool suite called Component Synthesis with Model Integrated Computing (CoSMIC). This document does not deal with the motivation for the development of this capability, but rather describes a step-wise process for adding and composing simple benchmarks using PICML models.
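The generated benchmarking code itself is not reproduced in this abstract. As a rough illustration of the two metrics it mentions, the following is a minimal sketch of how roundtrip latency and throughput are typically computed by timing repeated invocations; the helper name `measure_roundtrip` and the use of a local call as a stand-in for a remote component invocation are assumptions for illustration, not PICML output:

```python
import time

def measure_roundtrip(operation, iterations=1000):
    """Time repeated invocations of `operation` and report
    average roundtrip latency (seconds/call) and throughput (calls/second)."""
    start = time.perf_counter()
    for _ in range(iterations):
        operation()          # in a real benchmark: a remote component invocation
    elapsed = time.perf_counter() - start
    latency = elapsed / iterations
    throughput = iterations / elapsed
    return latency, throughput

# Benchmark a trivial local call as a placeholder for a middleware roundtrip.
latency, throughput = measure_roundtrip(lambda: sum(range(100)))
print(f"avg latency: {latency * 1e6:.1f} us, throughput: {throughput:.0f} calls/s")
```

In a generated benchmark the timed operation would be an actual invocation on a deployed component, so the measured latency includes marshaling and network overhead rather than just local computation.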
Publication date: 2004